24 research outputs found

    The optimization of medical X-ray images

    This paper is concerned with one of the most important branches of medical imaging, the X-ray. The spread of digital X-ray imaging is of great importance today, because it has numerous advantages over the traditional film-based process. However, some of its drawbacks impose serious constraints, though these can be mitigated, even if not easily, with the tools of informatics. Such drawbacks include image resolution, contrast and bit depth; with digital techniques the human factor must also be taken into account, because the human eye limits visualization to a great extent. Addressing these problems, the article introduces an image optimization process by which the usability of X-ray images can be improved by enhancing image quality.
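
    As a rough illustration of the contrast and bit-depth issues discussed above, the following Python sketch stretches the informative intensity range of a raw detector image and quantizes it to 8 bits for display. The function name, percentile bounds and synthetic input are illustrative assumptions, not the paper's published algorithm.

        import numpy as np

        def stretch_contrast(image, low_pct=1.0, high_pct=99.0):
            """Linearly stretch intensities between two percentiles to the 8-bit range."""
            lo, hi = np.percentile(image, [low_pct, high_pct])
            scale = max(hi - lo, 1e-9)  # guard against a flat image
            stretched = np.clip((image.astype(np.float64) - lo) / scale, 0.0, 1.0)
            # Quantize to 8 bits for display: monitors and the human eye resolve
            # far fewer grey levels than a raw 12/16-bit detector records.
            return (stretched * 255).astype(np.uint8)

        # Example: a synthetic low-contrast 16-bit radiograph.
        raw = np.random.randint(2000, 2400, size=(512, 512), dtype=np.uint16)
        display = stretch_contrast(raw)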

    High resolution digital holographic microscope and image reconstruction

    Supporting Smart System applications in Scientific Gateway environment

    The CMS Event Builder

    The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling, are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined.
    Comment: Conference CHEP0
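
    A minimal sketch of the two-stage scheme described above, with stage 1 concentrating fragments from groups of 8 sources into super-fragments and stage 2 merging the 64 super-fragments into a full event. All names, fragment sizes and data shapes are illustrative assumptions, not CMS code.

        N_SOURCES = 512               # ~500 data sources, rounded to 64 x 8
        GROUP = 8                     # fragments merged per first-stage switch
        N_SUPER = N_SOURCES // GROUP  # 64 super-fragments

        def build_super_fragments(fragments):
            """Stage 1: concatenate each group of 8 fragments into a super-fragment."""
            return [b"".join(fragments[i:i + GROUP])
                    for i in range(0, len(fragments), GROUP)]

        def build_event(super_fragments):
            """Stage 2: combine the 64 super-fragments into a full event."""
            return b"".join(super_fragments)

        fragments = [bytes([i % 256]) * 2048 for i in range(N_SOURCES)]  # ~2 kB each
        event = build_event(build_super_fragments(fragments))
        assert len(event) == N_SOURCES * 2048

    Because the second stage only ever sees the 64 super-fragment streams, switches can be added to it incrementally, which is what gives the linear throughput scaling the abstract mentions.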

    Run Control and Monitor System for the CMS Experiment

    The Run Control and Monitor System (RCMS) of the CMS experiment is the set of hardware and software components responsible for controlling and monitoring the experiment during data-taking. It provides users with a "virtual counting room", enabling them to operate the experiment and to monitor detector status and data quality from any point in the world. This paper describes the architecture of the RCMS with particular emphasis on its scalability through a distributed collection of nodes arranged in a tree-based hierarchy. The current implementation of the architecture in a prototype RCMS used in test beam setups, detector validations and DAQ demonstrators is documented. A discussion of the key technologies used, including Web Services, and the results of tests performed with a 128-node system are presented.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 8 pages, PSN THGT00
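
    A minimal sketch of the tree-based control hierarchy described above, in which a command issued at the root node fans out to every child. Class and method names are illustrative assumptions, not the actual RCMS API.

        class ControlNode:
            def __init__(self, name):
                self.name = name
                self.children = []

            def add(self, child):
                self.children.append(child)
                return child

            def command(self, verb):
                """Apply a command locally, then propagate it down the tree."""
                print(f"{self.name}: {verb}")
                for child in self.children:
                    child.command(verb)

        root = ControlNode("rcms-root")
        for subsystem in ("tracker", "muon", "daq"):
            node = root.add(ControlNode(subsystem))
            node.add(ControlNode(f"{subsystem}-crate-0"))
        root.command("start-run")   # fans out to every node in the hierarchy

    The tree shape is what makes the system scale: adding detectors or crates only widens the fan-out at one level, and no single node has to talk to all 128 leaves directly.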

    The CMS High Level Trigger

    At the Large Hadron Collider at CERN the proton bunches cross at a rate of 40 MHz. At the Compact Muon Solenoid experiment the original collision rate is reduced by a factor of O(1000) using a Level-1 hardware trigger. A subsequent factor of O(1000) data reduction is obtained by a software-implemented High Level Trigger (HLT) selection that is executed on a multi-processor farm. In this review we present in detail prototype CMS HLT physics selection algorithms, expected trigger rates and trigger performance in terms of both physics efficiency and timing.
    Comment: accepted by EPJ Nov 200
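
    A back-of-the-envelope check of the quoted rate reduction, taking the two O(1000) factors as exactly 1000 for illustration:

        bunch_crossing_rate_hz = 40e6                     # 40 MHz at the LHC
        level1_output_hz = bunch_crossing_rate_hz / 1000  # hardware Level-1 trigger
        hlt_output_hz = level1_output_hz / 1000           # software High Level Trigger

        print(f"Level-1 output: {level1_output_hz / 1e3:.0f} kHz")  # ~40 kHz
        print(f"HLT output: {hlt_output_hz:.0f} Hz")                # ~40 Hz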

    Semi-shared storage subsystem for OpenNebula

    To address the limitations of OpenNebula storage subsystems, we have designed and developed an extension that is capable of achieving higher I/O throughput than the existing subsystems. The semi-shared storage subsystem uses central and distributed resources at the same time. Virtual machine instances with high-availability requirements can run directly from central storage, while other virtual machines use local resources. As I/O performance measurements show, this technique can decrease the I/O load on central storage by exploiting the local resources of host machines.
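
    A minimal sketch of the placement rule the abstract describes: instances flagged as high-availability are backed by central shared storage, while all others use the host's local disk. The names and the VM structure are assumptions, not the actual OpenNebula driver interface.

        from dataclasses import dataclass

        @dataclass
        class VirtualMachine:
            name: str
            high_availability: bool

        def choose_datastore(vm):
            """Pick a storage backend for a VM under the semi-shared scheme."""
            # HA instances must survive host failure, so they stay on central
            # storage; everything else offloads its I/O to the host's local disk.
            return "central-shared" if vm.high_availability else "host-local"

        for vm in (VirtualMachine("db-master", True),
                   VirtualMachine("batch-worker", False)):
            print(vm.name, "->", choose_datastore(vm))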